Reading - Davies, philosophy of mind in Grayling

Greg Detre

Saturday, 16 March, 2002

 

Introduction

philosophy of mind: ordinary everyday conception of mental phenomena

philosophy of psychology: questions that arise out of the scientific study of mental phenomena

how can beliefs be about things?

how can sensations have their subjective phenomenal character?

this chapter largely ignores epistemological questions

about the way in which mental processes enable us to gain knowledge

about our knowledge of mental states and processes themselves

our knowledge of other people's minds

about our - apparently special - way of knowing our own mind

folk psychology = our everyday conceptual scheme

what kind of thing is the basis of our ability to describe, explain and predict what people experience, think and do?

could be grounded in knowledge of a common-sense empirical theory about experiences, beliefs + actions - about how they come about, and what their typical consequences are

alternatively, our understanding of other people could depend fundamentally upon a capacity to identify with them in imagination

how do children come to understand the mental lives of others (similar to folk psychology)?

emotions

Mind and brain

dualism - person = a composite of a material thing and an immaterial thing

eliminativism - there are no such things as minds, and talk of distinctively mental properties and mental states cannot figure in a fully serious/scientific account of the natural (physical) world

reductionist - only physical properties + states are really real, but mental states + properties can be identified with physical counterparts

Dualism

Cartesian Dualism = 'there are bodies, or material things, whose essence is to be extended in space, and there are minds, which are immaterial things, whose essence is thinking' - elaborated in the Sixth Meditation

'I am' or 'I exist' since 'whenever I utter or conceive it in my mind, it is necessarily true' - even the mental act of doubting the proposition 'I exist' already guarantees its truth

popular dualism = the body is a kind of container, and the mind/soul is located inside

Descartes explicitly rejected the idea that the mind is present in the body 'as a pilot is present in a ship', since minds do not have spatial properties

substance dualism = 'minds are things in their own right; things that can be counted: there is a difference, e.g. between one mind and two qualitatively similar minds' ???

C:\greg\academic\reading\phil\topics\mind\queens canada lectures on mind\Kim Chapter two �The Many Problems of Mental Causation�.htm

C:\greg\academic\reading\phil\topics\mind\hk lectures on mind\Dualism.htm

C:\greg\academic\reading\phil\topics\mind\hk lectures on mind\DualismPart II.htm

C:\greg\academic\reading\phil\topics\mind\consciousness\Strawson.html

property dualism = there is no immaterial substance but mental properties are still importantly different from, and cannot be fully explained in terms of, physical properties

Strawson, 'Self, mind and body' - Cartesian dualism as 'fundamental misconception'

first objection: it is so implausible that every statement about a person can be analysed into two statements: one about a body and its occupancy of space, and the other about a mind and its thinking processes - although Cartesians might say that there are good reasons why our actual language does not have the resources that the analysis would need

second objection: Cartesian dualism says that a person is really composed of two one-sided things, whereas he thinks that a person is one two-sided thing

to say that minds are countable objects is to say that, where minds are concerned, there is a difference between numerical and merely qualitative identity - 'there is not the slightest reason for thinking that this can be done [showing that the concept of a mind is more fundamental than the concept of a person, as Cartesians think]'

Behaviourism

analytical behaviourism = a doctrine about the meaning of our mental discourse

analyses/translates/reduces talk about emotions, sensations, beliefs, desires, hopes, wishes and the rest into complex talk about patterns of behaviour

any mental predication (e.g. 'x is in pain') is to be analysed as a bundle of hypothetical/conditional statements about how x would behave in various circumstances

avoid any further mental notions - describe the behaviour in terms of the 'trajectory carved out by a person's body'

although associated with Ryle, he does not appear to impose a stringent/full reductionist standard upon himself (Hornsby points to 'drawing consequences from the original proposition', 'skate warily', 'dwell in imagination on possible disasters' etc.)

mental talk does not have to be construed as being about states of an immaterial mind, but we may not be able to do without mental terms altogether

problems:

does not give an adequate account of the causal roles of mental properties + states

it can explain mental states underlying observable behaviour (e.g. in terms of conditionals/predispositions), but not so easily how one mental state could cause/interact with further mental states

Davies argues that there is a reason of principle why we cannot avoid having to reintroduce terms for mental states into the analysis

behaviour is explained in terms of beliefs and desires. analysing mental predicates in terms of behaviour would require reintroducing mental terms, about what x desires, into the analysis(???)

general requirement for attribution of beliefs: cannot be correct if they merely summarise the creature's/system's dispositions to behave in various ways in various circumstances

is this an attack on Dennett's argument of what it is to have a belief???

no no no - what Davies is saying is that 'behaviour is explained in terms of both beliefs + desires; given a particular belief, what behaviour is forthcoming depends upon what desires are present. So having a belief cannot itself be equated with a disposition to behave in a certain way' (see pg 284)

Chomsky's objection about the creativity of language

see notes - Lyons on Chomsky

see Smart's criticism below, about there being something about the inner constitution in virtue of which hypothetical (predictive) statements can be made

how does analytical behaviourism explain/generate predictions??? operant conditioning etc.???

Central state materialism (identity theory)

central state materialism = identifies mental states and processes with physical states and processes of the brain/CNS

Place (1956): accepted talk about propositional attitudes in terms of dispositions to behaviour, but rejected behaviourism for sensations and other conscious experience (for which 'some sort of inner process story is unavoidable')

objection: that talk about consciousness and sensations is not equivalent in meaning to talk about brain processes

but Place says his thesis is not a semantic one

'consciousness is to be identified with a brain process in the same way that a cloud is identified with a mass of tiny particles in suspension, or lightning is identified with a certain kind of electrical discharge, or temperature with the mean kinetic energy of molecules' (Davies) ???

what's the difference between this and a semantic thesis (see questions below)???

it is not part of the meaning of the word 'pain' that pain is c-fibre-firing, any more than it is part of the meaning of the word 'lightning' that lightning is a kind of electrical discharge

Smart criticises Ryle because there should be something about the inner constitution of the system in virtue of which the hypothetical statement about behaviour is true

Smart: hypothetical statements cannot 'float in free air' as Ryle holds, but 'need a categorical, or relatively more categorical, basis'

Smart identified beliefs + desires as well as sensations + experiences with brain states/processes - makes it easier to give an account of the causal roles of mental states

type-type identity = a type of mental state (e.g. pain, whether in me/you/today/tomorrow) is identified with a type of physical state (e.g. firing of c-fibres)

token-token identity =

'My pain right now (token of the type pain) is identified with a particular physical occurrence (a token of some physical type), and your pain tomorrow is identified with a particular physical occurrence in you - my pain today and your pain tomorrow are two tokens of the same mental type: they are both instances of pain. But the physical occurrence in me today and the physical occurrence in you tomorrow might be tokens of the same physical type (e.g. c-fibre firing) or of quite different physical types. If it turns out that the physical occurrences are of different types, then we still have a case of token-token identity (since each token of a mental type is identified with a token of some physical type or other). But we do not have a case of type-type identity (since different tokens of the same mental type are identified with tokens of different physical types)'

objection:

if pain is identified with c-fibre firing, then a being lacking c-fibres is automatically classified as a being that does not experience pain

similarly, beings with very different material constitutions are automatically classified as not having beliefs, desires + other mental states

an analytical behaviourist might say:

central state materialism goes significantly beyond what is strictly required

- ??? xxx (pg 261)

Functionalism

see Churchland, Matter and consciousness

functionalism = a type of mental state is defined by a causal role

three components of this causal role:

1.       causal relations between the state and inputs from the environment

2.       causal relations between the state and behavioural outputs

3.       causal relations between the state and other mental states

(which is what separates functionalism from behaviourism)

thus, what makes a state a pain state (e.g.) is not the particular physical nature of the state, but the role that the state plays

thus, functionalism avoids the charge of chauvinism against central state materialism

the same role could be played by different realizers

functionalism makes the strong claim that mental states can be defined/specified/individuated by way of their causal roles

- ??? xxx (pg 262-3)

it may not be immediately obvious whether or not functionalism is intended as a doctrine about the meanings of mental terms

two kinds of functionalism � analytic + scientific:

based on common-sense theory about mental terms (analytic)

= the names of mental states derive their meaning from platitudes/common knowledge about the causal relations of mental states, sensory stimuli and motor responses (Lewis)

based on scientific theory about mental states and their causal relations

= the functionalist analyses of mental terms would have the status of 'substantive scientific hypotheses' (Block), rather than being based on/a matter of common knowledge

functionalism vs materialism

functionalism need not be regarded as a materialist alternative to dualism, since, in principle, causal roles might be realised by non-physical states of an immaterial system (pg 262)

to the extent that a mental state type is not identified with a physical state type, functionalism is not a type-type identity theory

since functionalism does envisage that there is something physical in common between my pain today and my pain yesterday, and between my pain and your pain, even though it is allowed that you and I might not share the physical commonality with all other creatures who experience pain - so there is a kind of type-type claim in the offing, though it is a relativised one

pain (the type) is c-fibre-firing (the type) in humans. The state (type) that realizes in humans the pain role specified by theory T is c-fibre-firing

- ??? xxx (pg 264-5)

on the other hand, to the extent that the realizers of mental state roles are always physical, functionalism is committed to token-token identity (between mental + physical occurrences)

but some functionalists say that token-token identity underdescribes functionalism

second-order reduction

third kind of functionalism (Putnam)

= mental states are functional states, where the example of a functional state is one of the states specified in the machine table of a Turing machine

a Turing machine = defined by a table that specifies, for each state of the machine and each permissible input, an output and a (next) state

this is a poorer version than Lewis's, since it only allows one state at a time

that state would have to be analogous to a person's total mental 'set' at a given time, so cannot offer an account of causal relations amongst the mental states that a person has at a given time
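Putnam's machine-table idea can be made concrete with a minimal sketch (my own hypothetical example, not from Davies): a machine table maps each (state, input symbol) pair to an output and a next state, and the machine's whole condition at any moment is just its single current state.

```python
# Hypothetical machine table for a trivial Turing machine that inverts a
# binary string: (state, symbol) -> (symbol to write, head move, next state)
TABLE = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape):
    cells = list(tape) + ["_"]         # "_" marks the end of the input
    state, head = "scan", 0
    while state != "halt":
        write, move, state = TABLE[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))                     # -> 1001
```

Note that the machine occupies exactly one table state at a time, which is the weakness flagged above: that single state would have to stand in for a person's entire mental 'set', so the table cannot represent causal relations amongst the many mental states a person has simultaneously.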

Interlude: reductionist + non-reductionist materialism

all the broadly materialist views so far were reductionist views

- xxx (pg 266)

but a broadly materialist view of the mind might not postulate any correspondences between mental and physical/functional (= 2nd-order definition in physical terms) types

a token-token broadly materialist identity thesis might claim that every particular mental occurrence is to be identified with a particular physical occurrence, without making any claim about what the physical occurrences identified with occurrences of a certain mental type may have in common

monism, e.g. every mental occurrence/event is also a physical occurrence/event

event dualism = the claim that mental events are distinct from physical events

not the same as Cartesian dualism, since both mental and physical events might involve only material substances

Anomalous monism

anomalous monism (= an 'event monism') = while every mental event is also a physical event (monism), still there are no strict laws connecting the mental and physical domains

1.       i.e. no strict psychophysical laws

the mental and physical domains are subject to such different overarching constraints that there cannot be strict connections between them

in the physical domain - 'constitutive element' in a physical theory(???)

e.g. principle of transitivity for length

in the mental domain (beliefs, desires + other propositional attitudes) - 'constitutive ideal of rationality'

any assignment of propositional attitudes to a thinking subject must make the subject out to be fairly coherent, consistent and cogent, otherwise we forego the chance of treating them as persons

2.       'anomalism of the mental' = 'there are no strict deterministic laws on the basis of which mental events can be predicted and explained'

this applies to propositional attitudes, but not to sensations

no strict correlations between mental (propositional attitude) types of event and physical types of event

3.       assumption: that there are strict laws governing physical events (making them predictable and explicable)

it's a non-reductionist materialism, since it's materialist, but does not identify mental with physical properties or analyse mental talk in other terms

- ??? xxx (pg 268-9)

possible queries:

about the first step, leading to the interim conclusion that there are no psychophysical laws

about the continuation of the argument, to the final conclusion that mental events are physical events

no argument for the nomological character of causation

- ??? xxx (pg 269+)

supervenience (Davidson)

= 'that there cannot be two events alike in all physical respects but differing in some mental respect'

= 'an object cannot alter in some mental respect without altering in some physical respect'

it is not just that objects do not, as a matter of fact, but cannot (necessarily) alter mentally without altering physically

like all modal claims, supervenience can be glossed in terms of possible worlds - 'within a world' vs 'across worlds' supervenience claims (a distinction which Davidson does not make) (??? pg 271)

- ??? xxx pg 271

Eliminativism

see 'Consciousness The Hornswoggle Problem_ P_S_Churchland.htm', 'Quining qualia'

eliminativism/eliminative materialism

= there are no such things as beliefs, desires + intentions - mental talk should be eliminated from any serious discourse about the way the world works

= 'our common-sense conception [folk psychology] of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience' (Churchland)

developing neuroscience will reveal that our common-sense framework of psychological notions provides a false and radically misleading picture of cognition and the causes of behaviour

objections:

our common-sense conception is not a theory

but what�s a theory?

a theory = a body of generalisations

let�s grant that our everyday conception of the mental domain has some theoretical components

Churchland provides example generalisations which seem part of our common-sense conception, e.g. if x fears that p then x desires that not-p

but just because our theories are badly wrong (e.g. ordinary people might hold badly false theories about what philosophers are and what they do) does not mean that the theory's ontology can be eliminated (i.e. no such people as philosophers) - you need an extra premise

moreover, Churchland's generalisations do not make any commitment to one or other metaphysical view about the mental, so even if they were discovered to be radically false, it's not entirely clear what should be eliminated - so the eliminativist argument needs more metaphysical premises

the argument for eliminativism begins from the plausibility of the claim that there will not turn out to be 'vindicating match-ups' between folk psychological properties and neuroscientific properties

based on two premises:

- ??? xxx (pg 273)

1.       statements about people and their propositional attitudes invoke internal states and events that enter into causal relationships(???)

rules out analytical behaviourism, and Dennett's account of belief attributions(???)

2.       statements about propositional attitudes will only be true if 'the concepts of folk psychology [find] vindicating match-ups in a matured neuroscience' (Churchland)

rules out non-reductionist materialist views (e.g. anomalous monism)

- ??? xxx (pg 274)

Preview: Intentionality and consciousness

intentionality/content/aboutness

our beliefs + desires are about objects + properties in the world

and some of our mental states, principally sensations + perceptual experiences, have a phenomenal aspect to them. They are conscious states in the sense that there is something that it is like to be in those states

Nagel: 'true intentionality cannot occur in a being incapable of consciousness. The nature of this relation is very unclear to me, but its truth seems evident'

Intentionality

intentionality as what distinguishes the mental from the physical domain (Brentano)

for a materialist, how can one part of the physical world have significance, or meaning, or content? how can it be about another part?

Varieties of aboutness

Attitude aboutness

if Fiona believes that Venus is a planet

her belief is about an object, Venus, and a property, being a planet

propositional attitude states:

e.g. beliefs, desires, hopes, fears, wishes and intentions � have aboutness

they involve an attitude (believing, hoping etc.) towards a proposition (e.g. that the ice is thin)

= attitude aboutness

(some philosophers reserve 'intentionality' for this type of aboutness)

attitude aboutness is very fine-grained

two beliefs can be different in content, even though they concern the same object + property

although 'Fiona believes that Hesperus appears in the evening sky' and 'Hesperus = Phosphorus', we cannot validly infer that 'Fiona believes that Phosphorus appears in the evening sky'

belief reports exhibit the logical characteristics of intensionality

(as opposed to the aboutness of intentionality)

extensionality:

for any predicate, the set of objects of which the predicate is true is the extension of the predicate

two predicates that are true of just the same objects are said to be coextensive

extensional context:

a sentence containing a predicate constitutes an extensional context for the predicate if replacing the predicate by another predicate that is coextensive with it preserves the truth-value

(the substitution can be made salva veritate - preserving the truth value)

e.g. 'triangle ABC has three equal angles' constitutes an extensional context for the predicate 'has three equal angles' because you can swap in the coextensive predicate 'has three equal sides' and the truth-value will stay the same

but 'Fiona believes that triangle ABC has three equal angles' does not constitute an extensional context for 'has three equal angles', because if you substitute in the coextensive predicate 'has three equal sides', the truth-value of the sentence might well change

intensionality:

a context that is not extensional is said to be intensional

in first-order predicate calculus, all contexts for predicates are extensional - it's an extensional language

this allows two other things to be done salva veritate:

1.       one name for an object can be substituted for another name of the object

2.       a constituent sentence of a complex sentence (e.g. a conjunction) can be replaced by another sentence with the same truth-value

(i.e. an extensional language is truth-functional)

these cannot be done when a language contains intensional contexts, as does the language of belief reports � see what happens when names + sentences are substituted in the examples above
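The extensional/intensional contrast above can be illustrated with a small sketch (a hypothetical example, not from the text): model each predicate's extension as a set; an extensional context depends only on the extension, while a belief context (crudely modelled here as sensitivity to the wording of the embedded sentence) does not.

```python
# Extensions of two coextensive predicates (true of exactly the same objects)
extension = {
    "has three equal angles": {"ABC"},
    "has three equal sides": {"ABC"},
}

def extensional_context(pred):
    # "triangle ABC <pred>" is true iff ABC is in the predicate's extension
    return "ABC" in extension[pred]

fionas_beliefs = {"triangle ABC has three equal angles"}

def belief_context(pred):
    # "Fiona believes that triangle ABC <pred>" depends on the wording of
    # the embedded sentence, not just the extension - an intensional context
    return ("triangle ABC " + pred) in fionas_beliefs

# substituting coextensive predicates is salva veritate extensionally...
print(extensional_context("has three equal angles"))  # True
print(extensional_context("has three equal sides"))   # True
# ...but can change the truth-value in the belief context
print(belief_context("has three equal angles"))       # True
print(belief_context("has three equal sides"))        # False
```

The crude modelling choice (keying beliefs on sentence strings) is only meant to mimic the logical behaviour of belief reports, not to be a theory of belief.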

Linguistic aboutness

linguistic aboutness = the aboutness/meaning of public language sentences, and of conventional signs + signals

sentences of a public language have aboutness

but a philosophical account of their aboutness may differ from the aboutness of belief states

some philosophers of language aim to give an analysis of the notion of public language meaning for sentences in terms of such notions as convention, intention and belief (Grice), e.g. the intention of getting the listener to form certain beliefs

apparently, this Gricean analytical programme cannot be applied a second time over to the states of intending and believing themselves, for propositional attitude states are not utterances performed by agents with communicative intentions - pg 278

is Grice saying that utterances have derived intentionality from the intentions + beliefs of the speakers (as a matter of convention)??? (cf the Chinese room and the derived intentionality of the programmers)

language of thought hypothesis: mental events as involving analogues of linguistic expressions, i.e. that occurrences of propositional attitudes involve tokens of symbols that have structural properties like those of linguistic expressions (Fodor)

Davies finds this plausible

but a Gricean account of linguistic aboutness cannot be transformed via the LOTH into an account of attitude aboutness

Indicator aboutness

phenomena that indicate something about the world, e.g. 'those spots mean/indicate measles', or the position of the petrol gauge etc.

but indicator aboutness seems to be different, because it does not appear to allow for the possibility of misrepresentation, i.e. we cannot consistently say 'those clouds mean - or indicate - that it will rain; but in fact it will not rain'

two views about indicator aboutness:

another instance of aboutness derived from the aboutness of propositional attitudes

e.g. the aboutness of the clouds (meaning that it will rain) is inherited from the attitude aboutness of the belief that someone could form on the basis of observing the clouds

not dependent upon the propositional attitudes of observers

the clouds' aboutness is a fact, whether it is discovered or not

Davies thinks this is more plausible

indicator aboutness is a matter of a reliable 'causal covariation' between events of two types (Dretske)

surely though there's causal covariation between any two events, if you choose to see them as exemplars of an arbitrary type - the aboutness derives from the particular patterns/categorisations/similarities we (choose to/are designed to) see???

Experiential aboutness

perceptual experiences: present the world/objects as having certain properties

relates to attitude aboutness - form beliefs (conceptualised content - requires possession of a concept) on the basis of experiences (non-conceptual content)

Hamlyn considers the idea of non-conceptual content to be spurious

but we should distinguish between the content of experiences and the content of the beliefs we form as a result

also, experiential aboutness allows for the possibility of misrepresentation

Subdoxastic aboutness

subdoxastic aboutness = the unconscious psychological states (the processes that lead up to experience, and belief)

from Gk, 'doxa' = opinion/belief

cf Stich, who describes states which 'play a role in the proximate causal history of beliefs, though they are not beliefs themselves' as subdoxastic states

mental representations: information-processing psychology requires there to be information in (e.g. sensory systems), about other states of the creature and the external world

unlike:

attitude aboutness: the creature does not need to possess the concepts used to specify the information being processed (non-conceptual content)

linguistic aboutness: mental representations are not produced by internal agents participating in a convention

indicator aboutness: possibility of misrepresentation

experiential aboutness: not tied to consciousness

Searle is suspicious of the idea of genuine intentionality that is not tied to consciousness

Review

the hierarchy described of aboutnesses is debatable

The intentional stance

division between:

1.       taking the believer (the whole person/system) as fundamental

'for any thinking subject x, x believes that ... if and only if ___'

2.       taking the belief state (an internal state of the system) as fundamental

'for any internal state s, s is a belief that ... if and only if ___'

an account of the second can be wrapped inside an account like the first

one suggestion is that attitude aboutness is some kind of construct out of indicator aboutness, e.g.

'for any internal state s, s is a belief that there is a predator nearby if and only if the occurrence of s indicates that there is a predator nearby'

intentional system = 'a system whose behaviour can be - at least sometimes - explained and predicted by relying on ascriptions to the system of beliefs and desires'

fits the first format, focusing on the whole person/system (like analytical behaviourism)

assumes:

1.       that the machine will function as it was designed to do

2.       we assume that the design is optimal

intentional stance

= 'one predicts behaviour in such a case by ascribing to the system the possession of certain information and supposing it to be directed by certain goals, and then by working out the most reasonable and appropriate action on the basis of these ascriptions and suppositions'

= 'first you decide to treat the object whose behaviour is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs' (Dennett)

physical stance = make predictions that 'are based on the actual physical state of the particular object, and are worked out by applying whatever knowledge we have about the laws of nature' - without making intentional stance assumptions

design stance = make predictions 'solely from knowledge or assumptions about the system's functional design, irrespective of the physical constitution'

making just the first assumption:

1.       that the system will function as it was designed to do

e.g. assuming that software will function as described in the manual - it cannot predict/explain malfunctions (that requires you to descend to the physical stance)

thus we abstract away, in the:

design stance: from the details of the physical realization of a designed system

intentional stance: from the details of the design, and just assume that it's sensible/rational

attribution of belief: x believes that snow is white if ascribing that belief to the system figures in the predictively successful adoption of the intentional stance towards the system

in a way, I think Dennett's very broad definition of belief is interesting enough to merit being 'belief*', but it isn't what we mean by belief usually

this is a completely non-reductionist belief - it does not attempt to analyse/define belief in more basic terms, or to identify the property of believing with a behavioural, functional or neural property

it is not a form of analytical behaviourism, since having a given belief is not analysed in terms of a disposition to behave in some particular way

this is good news, because Davies thinks that analytical behaviourism is fundamentally flawed because it ignores the fact that beliefs on their own do not tell us anything about behaviour without goals for those beliefs to direct our behaviour towards

however, it is a bit behaviouristic, insofar as what makes the attribution to a subject of beliefs and other attitudes true is just the presence of patterns in that subject's behaviour

we might see it as 'supervenient behaviourism' (cf section 1.3(???)) - no mental difference without a behavioural difference - if two systems show exactly the same dispositions to behaviour, then the intentional stance licenses just the same belief + desire descriptions of the two systems

objection:

what about a robot using a very large look-up table to determine its behaviour? in the examples, its behaviour is set up to be as susceptible to prediction as a real person's, but would it be right to attribute beliefs to a machine that 'has the intelligence of a jukebox' (Block)?

they think that this makes supervenient behaviourism inadequate - attributions of propositional attitudes do, after all, impose some conditions upon the structure + processing inside the system
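Block's worry can be sketched in a few lines (a hypothetical toy, not from the text): a belief-desire agent and a pure look-up table agent can be behaviourally indistinguishable, so any view on which attributions supervene on behavioural dispositions alone must treat them alike.

```python
def belief_desire_agent(percept):
    # action derived from a belief (formed from the input) plus a desire
    believes_predator_nearby = (percept == "predator")
    desires_safety = True
    if believes_predator_nearby and desires_safety:
        return "flee"
    return "graze"

# the "jukebox": behaviour fixed by a giant table, with no inner reasoning
LOOKUP = {"predator": "flee", "grass": "graze", "nothing": "graze"}

def lookup_agent(percept):
    return LOOKUP[percept]

# identical dispositions on every input the table covers
print(all(belief_desire_agent(p) == lookup_agent(p) for p in LOOKUP))  # True
```

The point of the toy is only that the two agents' input-output dispositions coincide, even though one has internal structure resembling belief + desire and the other does not.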

somehow, this points towards eliminativism... xxx ??? (pg 285)

it might be thought that there are de facto reasons constraining the internal architecture, required for producing appropriately patterned behaviour in real time

Dennett entertains the LOTH ('the internal cognitive architecture of human beings involves symbol manipulation, like a computer'(???)) as the best explanation of the consistently patterned behaviour of human beings, but is not committed to it

whereas a theory which sees the LOTH as one of the commitments of our common-sense conception would be led to an eliminativist argument

if there were a general argument to show that no philosophical theory should create the opportunity for an eliminativist argument to get started, then that would count very much in favour of supervenient behaviourism

a philosophical argument showing that some particular cognitive architecture is a necessary condition of propositional attitudes would count decisively against supervenient behaviourism

I don't buy Block's sci-fi look-up table robot as an objection to Dennett, because I don't think it would be possible - and arguably, if it was indistinguishable behaviourally from a human being, it would probably have intentionality derived from its maker

Covariation and teleology

Davies now considers an internal state-based interpretational approach to belief rather than a system-based one, like Dennett's (see beginning of earlier section):

'for any internal state s, s is a belief that ... if and only if ___'

try to give an account of the content of an internal state in terms of lawful covariation (i.e. correspondence???) between that state and some state of the world

cf indicator aboutness (no possibility of falsehood/misrepresentation)

but we need to permit misrepresentation

Cummins' mechanical device called 'Locke' - it's a TV camera that produces punch card 'percepts' which are compared against stored punch card arbitrarily-named 'concepts'

x represents y in Locke if and only if x is a punch pattern that occurs in a percept when, only when, and because Locke is confronted by y

this renders misrepresentation logically impossible

we could allow for malfunctions and performance limitations in Locke, or stipulate that the above statement only works under ideal circumstances
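
A minimal sketch (my own hypothetical illustration of Cummins' point, with made-up names and patterns) of why the definition makes misrepresentation logically impossible: whatever pattern the camera actually produces when confronted by y counts, by definition, as a representation of y, so percept and cause can never come apart.

```python
# Hypothetical sketch of Cummins' "Locke" device. A percept punch pattern is
# compared against stored, arbitrarily named concept patterns. Under the
# definition "x represents y iff x occurs when, only when, and because Locke
# is confronted by y", whatever pattern the camera actually produces counts
# as a representation of whatever actually caused it - so Locke cannot,
# by definition, misrepresent.

STORED_CONCEPTS = {
    "CONCEPT_17": (1, 0, 1, 1),   # arbitrary name, arbitrary punch pattern
    "CONCEPT_42": (0, 1, 1, 0),
}

def camera(confronted_object):
    """Stand-in for the TV camera: the object causally fixes the pattern."""
    patterns = {"cat": (1, 0, 1, 1), "dog": (0, 1, 1, 0)}
    return patterns[confronted_object]

def classify(percept_pattern):
    """Return the name of the stored concept matching the percept, if any."""
    for name, pattern in STORED_CONCEPTS.items():
        if pattern == percept_pattern:
            return name
    return None

# Since the percept's content is defined by what caused it, the "percept of
# a cat" can never be anything but the cat-caused pattern: no falsehood.
print(classify(camera("cat")))   # matches CONCEPT_17
```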

Davies considers Dretske's magnetotactic oxygen-fleeing ('teleological function') bacteria, and that they seem to use magnetic north as an indicator of down, away from the oxygen-rich water surface - they can misrepresent, e.g. if we put a bar magnet nearby

objections:

might not be able to generalise the analysis in terms of teleological function, since our beliefs aren't always about needs

it is not clear that this is a case of misrepresentation

covariation-plus-teleology theories seem to cut content too coarsely to yield an adequate account of attitude aboutness

Functional role and holism

functional role semantics (or 'conceptual role') = applying functionalism to the contents of propositional attitude states

'semantics' here = the aboutness of mental states (rather than the meanings of expressions in a language)

different kinds of functional role:

causal, computational and inferential roles

short- and long-armed - short-range causal relations between states within the person/creature/system, vs long-range causal relations between belief states + states of distal items

narrow content = a kind of content that is guaranteed to be preserved between systems that are internally the same, even if embedded in very different environments

inter-subjective synonymy = different states' having the same content

problem for functional role semantics is that no belief state in one person's head stands in exactly the same causal relations as any belief state in another person's head

it's 'intractably holistic' (Fodor & LePore)

it's a kind of 'use theory of content':

e.g. a use theory of meaning in a public language = the meaning of an expression is a matter of the (communicative) use to which the expression is put

for mental states, a use theory of content/aboutness = the content of an internal state is a matter of the use/role of that state in the overall cognitive economy of the thinking subject

we should not conceive of the expressions of the LOT (within functional role semantics) as having their content in virtue of the intentions with which they are used by little communicating agents inside our brains (I think this is what is meant ??? pg 292)

Block describes a thought's representation's 'functional role' as 'for it to be inferrable in certain contexts from certain other representations with their own functional roles, and for other representations with their own functional roles to be inferrable from it. But to be able to have these other representations with their own functional roles is to be able to have other thoughts corresponding to these other representations. Thus, having one thought involves being able to have others'

Davies' example is that in order to have thoughts about red at all, then you need to know things like 'whatever is red is coloured' and 'whatever is red is not green'

if we both think about a red vase, and you think that Granny will like it because of its colour, while I have no idea of her colour preferences and do not form the thought, then our thoughts about the red vase have different content (at a fine-grained level), since they play different functional roles

perhaps the only sense in which we can say that our public ascriptions of belief are the same is if we accept that they don't fully specify the contents of our thoughts, perhaps only the object (coarse-grained �referential content�) + properties

two-factor account of content (Block):

narrow, internal, short-armed functional role, holistic factor

external factor that depends on the way in which the subject is embedded in the environment - referential content - specifies objects + properties, publicly sharable (despite individuals' varying cognitive economies)

responses to the problem of the lack of inter-subjective synonymy

1.       no two subjects have exactly the same fine-grained contents, but the same (coarse-grained) referential content - but this seems too coarse

2.       demarcate some sub-part of the total functional role of a belief state (e.g. its inferential role) as essential to its content

apparently this is based on the (now considered spurious) analytic/synthetic distinction (???)

Concepts and possession conditions

assumption that we can resist at least some of Quine's rejection of the analytic/synthetic distinction

thoughts = structured, abstract entities: fine-grained propositions/contents, built out of concepts as constituents (like a sentence is built up of words)

this does not assume the LOTH - the LOTH concerns states/events of thinking, and says that they involve not just structured contents, but structured vehicles of content

concept = the ability to think of an object or property in a certain way

close connection between having beliefs/having concepts

possession condition for a concept:

'for any thinking subject x, x possesses the concept ... if and only if ___'

derive a theory of concept possession from a given account of attitude aboutness

e.g. based on the intentional stance:

the concept of being white is a concept C such that: for any thinking subject x, x possesses the concept C if and only if x can be predictively attributed some beliefs in whose content the concept C is involved

however, this does not individuate 'the concept' (e.g. of being white), just 'a concept'

or, based on the covariation account (even though it's already been shown to be inadequate):

the concept of being white is a concept C such that: for any thinking subject x, x possesses the concept C if and only if there is an internal state of x that occurs in x when, only when, and because x is confronted by something white

but this does not allow for false beliefs, or imagined or absent objects

but it does individuate the concept of being white, from e.g. the concept of being green

Peacocke's idea of a possession condition individuates a concept, and that certain inferential liaisons figure crucially in a subject's possession of a given concept, e.g.

'the concept "and" is a concept C such that: for any thinking subject x, x possesses the concept C if and only if x finds these forms of inference compelling, without basing them on any further inference or information: from any two premisses A and B, ACB can be inferred; and from any premiss ACB, each of A and B can be inferred'

assigns a privileged status to certain patterns of inference involving the concept
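
The two privileged inference patterns in the possession condition can be sketched as trivial operations (my own hypothetical illustration, not Peacocke's formulation): conjunction-introduction takes the subject from premisses A and B to the conjunction, and conjunction-elimination recovers each conjunct.

```python
# Hypothetical sketch of the privileged inference patterns in Peacocke's
# possession condition for conjunction: introduction (from premisses A and
# B, infer "A and B") and elimination (from "A and B", infer each of A and
# B). On the account, possessing the concept just is finding exactly these
# transitions primitively compelling.

def and_introduction(a, b):
    """From premisses A and B, form the conjunction (A, 'and', B)."""
    return (a, "and", b)

def and_elimination(conjunction):
    """From a conjunction (A, 'and', B), recover each conjunct."""
    a, connective, b = conjunction
    assert connective == "and"
    return a, b

p = "it is raining"
q = "it is cold"
conj = and_introduction(p, q)
assert and_elimination(conj) == (p, q)   # the two rules mesh: a round trip
```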

discusses this approach (formulate possession condition, use 'such that' to individuate, replace the concept name with a variable 'C'; cast in terms of thinking subjects rather than internal states), its advantages (non-reductionist), and the commitments it requires (may require underpinning by LOT, affinity with functional role semantics)

Theories of intentionality

considers that no reductionist theory (e.g. covariation, teleological function, functional role) will be satisfactory as a theory of attitude aboutness (though perhaps of other types of aboutness)

perhaps consciousness is the primary locus of this resistance to reduction

Consciousness

most obvious objection to the materialisms is their failure to account for consciousness ('the most fundamental problem in the area' (Nagel))

An argument for elusiveness

'no matter how the form [of conscious experience] may vary ... fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism - something it is like for the organism' (Nagel)

there may also be something it is like for that creature to be in some specific state

the question for central state materialism is why there should be something that it is like for certain processes to be occurring in our brains

Nagel thinks we do not even know how to begin to answer this:

'if mental processes are indeed physical processes, then there is something that it is like, intrinsically, to undergo certain physical processes. What it is for such a thing to be the case remains a mystery'

'if physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character, it seems that such a result is impossible [because] every subjective phenomenon is essentially connected with a single point of view and it seems inevitable that an objective physical theory will abandon that point of view'

by �point of view�

= something that is shared by many individuals in virtue of their having similar perceptual systems

= constitutes a kind of limitation upon what is conceivable for an individual

because our point of view is so different, we might not be able to form a conception of the subjective character of a bat's conscious experience

he is contrasting these facts about the subjective character of experience (where (in)accessibility of the facts is sensitive to our point of view (and so on our particular perceptual systems)) with (e.g.) physics or neurophysiology (which are more insensitive to point of view)

if phenomenological facts have this property that physical facts lack, they are not the same

potential ambiguity about facts: how fine-grained are they?

e.g. is recognising Venus in the morning vs evening sky the same fact, given that people might be able to do one without the other?

are facts as fine-grained as thoughts (attitude aboutness � at the Fregean level of sense), or more like coarse-grained states of affairs (referential content � at the Fregean level of reference)

does someone who has only ever seen Hesperus, but knows that it also appears in the morning etc., have an incomplete account?

Davies thinks not, since the inaccessible thoughts (about Phosphorus) are thoughts about the very same item that the accessible thoughts are about

arguably then, Nagel's argument does not show that physicalism provides an incomplete account of the world - only that certain physical states can be thought of in different ways - the difference between the physical and phenomenal lies at the level of sense, but does not demonstrate a difference at the level of reference

Nagel thinks though that conscious experience is different - with a planet, there is a clear separation between the object as it is and the way that the object appears (the 'mode of presentation') - but with experience, there is no separation between the way that the experience is and the way that it appears

An announcement of mystery

Davies says that there is a fair measure of consensus that Nagel's argument cannot establish the metaphysical conclusion that conscious experiences are not identical with physical processes in the brain

but even if you allow the identity, 'no presently available conception gives us a clue how this could be done. The problem is unique' (Nagel - also Block, Jackson, McGinn - there may be much about the way the world works that lies beyond our human understanding)

McGinn - how the brain gives rise to phenomenal consciousness is beyond our cognitive grasp. he considers there to be two putative routes to a grasp of the neural basis of consciousness:

1.       hope for an introspective fix upon the explanatory basis of phenomenal consciousness

but 'introspection does not present conscious states as depending upon the brain in some intelligible way' (McGinn)

Davies wonders why McGinn does not also close off the 'inference to the best explanation' route that he uses below here

he thinks that trying this would be almost question-begging, because it�s too close to the question that the argument as a whole is supposed to answer

2.       scientific study of the brain

but:

a)      ordinary perception of the brain does not bring us up against any such property

Davies thinks this is fair enough

what about just trying to find basic correlations between neuroimaging activity, say, and reported conscious activity??? these observable properties might allow us to talk in terms of equating phenomenal experience with something less concretely physical (e.g. computational properties etc.)

b)      no such property is going to be introduced by inference to the best explanation from perceptible properties of the brain

i.e. consciousness does not appear in the data, and so we'll only need theoretical properties in our explanation

thus, neither experience nor theory yields us any grasp upon a natural property of the brain that can explain consciousness

see Flanagan ch 6 for a detailed critique

Higher-order thoughts

demystify the phenomenal aspect of consciousness by claiming that consciousness is a matter of thought about mental states

Rosenthal:

mental states have either intentional or phenomenal properties

it is important that both these notions do not already involve the notion of consciousness, otherwise the argument becomes circular

it is plausible that only mental states have these properties non-derivatively

conscious states are a subclass of mental states

being conscious of something = 'just a matter of our having a thought of some sort about it'

thus consciousness (i.e. conscious mental states) = for the subject of that state to have a thought about it [its own mental state]

i.e. a 'higher-order thought'

in this way, consciousness is not that mysterious

problems:

1.       we are largely unaware of higher-order thoughts when in conscious states

but all that means is that the higher-order thoughts (making you conscious of the low-order thoughts) are not themselves the object of even higher-order thoughts

after all, the analysis did not say that for a mental state to be conscious, the subject must have conscious thought about it - just that the subject must have a thought about it

2.       seems to allow for the coherence of the idea of unconscious sensations, i.e. a mental state with phenomenal properties might occur without being accompanied by a thought about that state - seems counter-intuitive

but that's not a serious problem, e.g. the persistent headache that you get distracted from

3.       Rosenthal worries most about the fact that we intuitively ascribe consciousness to the states of creatures that we would not credit with the power of thought

he responds:

a)      it does not require a very sophisticated thought to make a phenomenal mental state conscious

� xxx ??? pg 310

there is a query about the necessity of the account given a rich notion of thought, and a query about its sufficiency given a thin notion of thought

what about unconscious guilt about an unconscious belief? that's higher-order thought about the first-order belief, but still does not make the belief conscious (Peacocke)

hmmm???

b)      another sense of the term 'conscious' means only that the creature is 'awake, and mentally responsive to sensory stimuli'

Nagel would find this too weak a notion of consciousness

Interlude: explaining consciousness

demystification:

negative aspect - the sense of mystery surrounding consciousness results from tempting fallacies + confusions that can be cleared up, and the explanatory gap will disappear with them (e.g. Dennett)

positive aspect - offering putative explanations of one or other property of conscious experience in neural terms

e.g. Churchland on colour - the neural coding of colour involves triples of activation values, in which the neural correlate of an experience of orange is closer to the neural correlate of an experience of red than to ... blue
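
A toy numerical sketch of the state-space idea (the activation triples here are made up for illustration, not Churchland's data): similarity between colour experiences is mirrored by distance between activation vectors, so orange lands nearer red than blue.

```python
# Illustrative numbers only (not Churchland's actual data): colour
# experiences coded as triples of activation values, with similarity of
# experience mirrored by Euclidean distance in activation space.

import math

def distance(u, v):
    """Euclidean distance between two activation triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

red    = (0.9, 0.2, 0.1)   # made-up activation triples
orange = (0.8, 0.4, 0.1)
blue   = (0.1, 0.2, 0.9)

# orange's neural correlate lies closer to red's than to blue's
assert distance(orange, red) < distance(orange, blue)
```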

structural properties/features of experience/perception 'might be more accessible to objective description, even though something would be left out' (Nagel)

Flanagan says that McGinn imposes an 'impossibly high standard on intelligibility'

arguably, Nagel et al. are asking for an explanation that presents an a priori logical connection between the explaining facts (about neural activations etc.) and the facts to be explained (about what it is like to have a given experience)

but on the other hand, statements of brute correlation are not explanations either

Davies thinks we may need a better understanding of the nature of explanation itself

Consciousness and intentionality

Searle thinks the missing ingredient in reductionist accounts of attitude aboutness is consciousness

'any intentional state is either actually or potentially a conscious intentional state'

all genuinely mental activity is either conscious or potentially so - all other activities of the brain are non-mental, physiological processes

the connection principle: 'the ascription of an unconscious intentional phenomenon to a system implies that the phenomenon is in principle accessible to consciousness', i.e. all genuine intentionality belongs to (potentially) conscious states

weak construal: psychological states that are inaccessible to consciousness have an importantly different kind of aboutness

strong construal (which is what Searle intends): supposedly psychological states that are inaccessible to consciousness do not really have aboutness at all, and do not deserve the label �mental�

aspectual shape: 'whenever we perceive anything or think about anything, it is always under some aspects and not others that we perceive or think about that thing', cf Fregean mode of presentation

intentionality requires aspectual shape

('intrinsic intentional states ... always have aspectual shapes')

and aspectual shape requires consciousness

('the aspectual feature cannot be exhaustively or completely characterised solely in terms of third person, behavioural or even neurophysiological predicates', cf Nagel and experience)

Davies says this leads to the question of whether there is a close link between conceptualisation and consciousness

it is not obvious that there must be something that it is like to believe something

if we granted the important link between intentionality (attitude aboutness) + consciousness, then does Searle's argument rule out other kinds of aboutness being less closely tied to consciousness?

if so, it threatens the legitimacy of cognitive science

but Davies says that it seems fairly clear that the argument for the connection principle does not itself cast any doubt upon a notion like subdoxastic aboutness - a challenge to the legitimacy of that notion would have to be mounted from another direction??? pg 312-314

Mental causation

epiphenomenal = the mental properties of mental events are not causally potent in producing other mental/physical events

Jackson maintains that the phenomenal properties of experiences elude any physicalist theory of the world, and that they're epiphenomenal:

'their possession or absence makes no difference to the physical world'

Causal relevance

on materialist views, it seems to be at the level of the more fundamental material properties that the real causal action takes place, while mental properties are a causally inert and irrelevant overlay

yet, intuitively it seems that our mental events do play a vital causal role

anomalous event dualism = mental events are composed out of physical events but are not individually identical with physical events

problems for causal potency on this account

Davidson's four positions:

1.       causation is a relation between particular events - 'event c caused event e'

2.       every particular mental event is identical with some physical event (event monism) - consequently, every mental event has both mental and physical properties

3.       wherever there is causation there are causal laws (nomological character of causation)

if c causes e then there is some property F of c and some property G of e, such that there is a causal law linking F-events and G-events

4.       causal relations between physical events are always subsumed by physical laws - if both the cause event c and the effect event e are physical events then there are physical properties F and G of c and e respectively, such that there is a causal law linking F-events and G-events

it seems that if the physical properties of the events are specified nomologically, then the mental properties are excess to requirements

yet at the same time, it feels as though the mental properties of the causal event (e.g. my deciding to raise my arm) are causally relevant

the question about epiphenomenalism is not whether mental events have effects, but whether the mental properties of mental events are relevant to the causal relations that those events stand in

it seems tempting (e.g.) in anomalous monism to say that mental events cause their effects in virtue of their physical (rather than mental) properties

Davidson rejects this way of posing the problem, since causality 'holds between [events] no matter how they are described'

Davies thinks actually that if two objects stand in a relation, you can ask which of their properties are relevant to their standing in the relation (e.g. colour being irrelevant to the relation being-longer-than (Kim))

so it is legitimate to distinguish between causally relevant and epiphenomenal properties of events

this does not solely apply to mental events, but also chemical, biological, geological or aerodynamic events too - but apparently it doesn't make sense to conclude in the same way that (e.g.) all chemical events are epiphenomenal

Fodor suggests then that it is only in a special fundamental sense that chemical, or mental, properties are epiphenomenal, but they have a causal role in the everyday sense of causal relevance

'intentional causal laws' = laws that speak of (e.g.) decisions with the content of lifting one's arm - these are not strict laws, but contain ceteris paribus clauses (like chemical laws)

special science laws can always be explained further in terms of basic science laws, but these are just the way the world works

Causation and supervenience

Davidson's third proposition (the nomological character of causation) requires that wherever there is causation there are strict laws

Davidson doesn't draw a distinction between causally relevant and epiphenomenal properties, but if he did, then it would be the physical properties that operate under the strict laws, and that would be exclusively causally relevant

two responses:

1.       properties that are mentioned in either strict or non-strict laws can be causally relevant

i.e. Fodor's causally responsible properties - but you need to explain why all of the properties of the event that are considered causally relevant do not then over-determine the effect

if anomalous monism allows for the notion of non-strict special science laws, this undermines the principle of the nomological character of causation

2.       Fodor's presentation - supervenience

premises:

a)       'the causal powers of an event are entirely determined by its physical properties'

b)      'no intentional property is identical to any physical property'

he argues that you can't say that a mental event would have had the same causal powers, even if it hadn't had its intentional properties, so long as its physical properties were preserved, given the supervenience of the mental on the physical ('a change in mental properties is always accompanied by a change in physical properties')

but this seems to require an across-worlds supervenience claim, while Davidson's is only a 'within a world' claim (pg 320-1)

Explanatoriness and efficacy

Fodor's nomological theory of causal responsibility = a property of an event is causally responsible if it is a property in virtue of which the event is subsumed under a causal law

mental/intentional properties are causally responsible to the extent that there are mental/intentional causal laws - there are such laws, even though they are not strict laws

is there any special problem about causal responsibility of intentional (as opposed to, e.g. chemical) properties?

intentional properties may be causally explanatory, without being causally efficacious properties (Jackson & Pettit), because they are relational (rather than intrinsic) properties of mental states + events

all the theories of content treat the intentional properties of internal states as 'highly relational' (as opposed to merely 'relatively intrinsic') properties

suggests something problematic about the causal relevance of intentional properties

however, Jackson & Pettit's examples seem to suggest that whenever a higher-level property can have variable realizations at a lower level, the higher-level property is not itself causally efficacious (including the special sciences)

threatens to rob us of causal efficacy altogether (except perhaps at a single fundamental level of physics)

or reduces down to the nomological theory of causal responsibility, which places intentional properties in the same boat as the special sciences

therefore, a useful distinction between efficacious and explanatory properties must be able to:

a)       allow that some non-fundamental properties can be efficacious

b)      reveal a prima facie problem with intentional properties being efficacious

 

Discarded

propositional attitudes = talk about beliefs, desires etc. (that can be fitted into a declarative sentence, i.e. thoughts about something expressed with language)

Questions

where do theory theory + simulation fit in???

with a thesis like analytical behaviourism, a semantic thesis (a thesis about the meaning of mental discourse), is it necessarily an ontological one as well, i.e. that the thing being reduced is what it's being reduced to???

well, no, because it's saying that the thing being reduced doesn't have an independent ontological status - it exists only as a way of talking about something else

so when Place says his identity theory is not a semantic thesis, is he really saying that neither half of the identity is fully reducible to the other???

what's the difference between hypothetical/conditional/dispositional statements (e.g. in behaviourism)???

can you have type-token identity???

is the distinction between (brain) states + processes important???

what does Smart mean exactly by 'categorical basis'???

why's it called 'central state' materialism???

because it focuses upon inner states that produce behaviour???

would it be possible, though, to ever equate pain in humans and Martians (e.g.), given how different the two systems will be???

I suppose just as you wouldn't expect Martians to feel pain in exactly the same way we do, you wouldn't expect to be able to find identical (but perhaps analogous) functional (i.e. causal) analogues

what is event dualism???

presumably, anomalous monism is an event dualism - it says that mental + physical events are distinct, in some way, but both instantiated materially...

so in what way are they distinct???

no no no no - anomalous monism is ('in contrast') a monism - how does this help???

where do sensations fit in in Davidson's idea of the mental domain???

what does rationality mean, in Davidson's terms???

ideal vs constitutive elements???

strict vs ceteris paribus laws???

why can't neuroscience show that folk psychology supervenes on neuroscience???

is there a difference between supervenience and emergent properties???

are emergent properties necessarily to do with centralised-seeming behaviour like flocking, or simply an epistemological barrier between low- and high-level properties, which are completely reducible???

is it something to do with whether the boundaries line up at both levels???

how do Dennett and Churchland(s) line up???

to what extent are eliminativism about folk psychology and about qualia the same???

Churchland's example generalisations of our common-sense conception are absurdly simple - I just don't buy the idea that our folk psychology can be codified as laws - although ultimately, even a multiple drafts model or something complex like that must be reducible to a set of probabilities - how is a simulation approach any richer???

if a theory is badly wrong, does it make sense though to say that its contents exist, but in a different way (e.g. ordinary people's badly wrong theories about philosophers)??? surely you're really talking about something new + different, but with the same name???

let's say that (some) propositional attitudes supervene on physical states - that doesn't really commit us to very much, does it??? might there be some physical states that can't be expressed(/reduced) propositionally???

does supervenience entail that there is a supervening state for every base state???

does the design stance require a teleology and/or artificer???

is there a parallel between Dennett's three stances and Marr's three levels???

can't analytical behaviourism overcome its undermining 'reason of principle' by somehow accommodating both beliefs and desires in its account of behaviour (analytical behaviourism would then be a sort of 'intentional+desirous' stance)???

what exactly is a finite state machine??? how is it different to a look-up table??? is having an unfolding sequence of discrete symbols part of it???

the LOTH says more than that the 'internal cognitive architecture of human beings involves symbol manipulation' (Davies) - it makes a stronger claim about language-like/syntactic operations, doesn't it??? e.g. [with an inner code]

'if there were a general argument to show that no philosophical theory should create the opportunity for an eliminativist argument to get started, then that would count very much in favour of supervenient behaviourism' - but isn't Dennett himself an eliminativist???

how legitimate is the idea of derived intentionality???

what's the difference between a use theory and a functional role theory???

when Block says 'having one thought involves being able to have others', is this at all related to systematicity???

content vs intentionality??? two-factor vs derived???

how do you separate consciousness from the mind-body problem???

is the fine-grained/sense/thoughts vs coarse-grained/reference/states of affairs distinction the same as intensional vs extensional???

McGinn's two-step argument against scientific study doesn't actually rule out epiphenomenalism???

well, it's not intended to rule anything out at the metaphysical level, only to show that all accounts are epistemologically inaccessible to us

actually, that wasn't what I meant - it almost seems to assume epiphenomenalism, doesn't it, when it says that consciousness will not be introduced by inference to the best explanation from perceptible properties???

no, because it's talking about explanation of perceptible properties, for which consciousness doesn't currently appear to be necessary

what about consciousness being introduced by inference to the best explanation from behaviour???

possibly, but I don't think that's necessarily true though, do I...

what do people like Churchland say to McGinn's argument about scientific theories not being able to explain consciousness???

she's an eliminativist, so presumably just thinks that it will reduce down

is she saying that consciousness is analytically reducible to neuroscientific language???

'the ascription of an unconscious intentional phenomenon to a system implies that the phenomenon is in principle accessible to consciousness' - why can't we say the same about machine states???

why is subdoxastic aboutness non-conceptual content (apparently, unlike attitude aboutness, the creature does not need to possess the concepts used to specify the information being processed), given the definition of a concept as 'the ability to think of an object or property in a certain way'???

is that a final/accepted definition of concept???

when you talk of the mental properties of mental events being causally potent in producing/causing other mental/physical events, what does causally potent mean??? does it mean that they in some way necessarily produce the effects, or simply that there is always some relation (constant connection) between the mental properties of the causal and effected event???

anomalous monism escapes from problems of free will, but it's as incoherent as all monisms about how mental events are physical events - and I think its being a token identity theory definitely makes it difficult to decide whether mental events have causal potency

Davidson's an event monist, right???

'if anomalous monism allows for the notion of non-strict special science laws, this undermines the principle of the nomological character of causation' - but surely the special science laws can be analysed/reduced to basic science laws, so the principle remains???

how do you augment a supervenience claim from within a world to across worlds???

isn't Fodor's problem with the nomological theory of causal responsibility that he's ascribing non-strict law-like behaviour to the mental, or requiring it, while Davidson specifically denies it with his anomalism of the mental???